November 23, 2020

Just days before the election, my final forecast went against the wisdom of professional forecasters and pollsters alike and projected a rail-thin electoral margin for Joe Biden. While the election results surprised many people on the night of November 3, my model’s point prediction anticipated an even closer race in the Electoral College (273 electoral votes for Biden, compared to his actual 306) but a wider spread in the popular vote (52.8%, compared to his actual 51.9%).

Accuracy and Patterns

The statistical aphorism that “all models are wrong, but some are useful” served as my guiding philosophy in constructing this model. As I discussed in my final prediction, I did not expect this model to perfectly forecast all outcomes in the election. Rather, this forecast aimed to provide a range of state-level probabilities and outcomes. Then, I used the most probable state-level outcomes to produce point predictions for the Electoral College and national popular vote. While these numbers could be interpreted as my “final prediction”, I would have been incredibly shocked if my point predictions perfectly matched the election outcome since my election simulations yielded a fair amount of uncertainty.
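
To make that pipeline concrete, here is a minimal Python sketch of how per-state win probabilities can be turned into an Electoral College distribution and a point prediction. It is illustrative only: the GA, NV, and AZ probabilities match the simulation shares quoted in the next paragraph, while the Florida probability and the simulation count are placeholders rather than values from my model.

```python
# Minimal sketch (not my actual simulation code): turn per-state win probabilities
# into an Electoral College distribution and a point prediction.
import numpy as np

rng = np.random.default_rng(2020)

states = ["GA", "NV", "AZ", "FL"]                     # illustrative subset of states
p_biden = np.array([0.192, 0.439, 0.205, 0.350])      # P(Biden wins); FL is a placeholder
electoral_votes = np.array([16, 6, 11, 29])

n_sims = 100_000                                      # placeholder simulation count
# Each simulation independently draws a winner in every state; the real model
# simulated correlated vote shares and turnout rather than simple coin flips.
biden_wins = rng.random((n_sims, len(states))) < p_biden
ev_totals = biden_wins.astype(int) @ electoral_votes  # Biden EVs per simulation

# Point prediction: award each state to its more probable winner, then sum EVs.
point_prediction = electoral_votes[p_biden > 0.5].sum()
print(np.percentile(ev_totals, [5, 50, 95]), point_prediction)
```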

All in all, I’m quite happy with how closely this model paralleled the election outcomes. It misclassified the winner in only three states–GA, NV, and AZ–which were among the final states called. Even though the model considered a Donald Trump victory more likely in these states, the forecast predicted a close race in each of them and gave either candidate a fair shot at winning–Joe Biden won GA, NV, and AZ in 19.2%, 43.9%, and 20.5% of simulations, respectively.

The actual Electoral College outcome, with each candidate winning exactly the states that they ultimately won in the election, occurred in 53, or 0.001%, of my simulations. For additional context, my exact point prediction occurred in 5,080 of my simulations, which equates to only 0.051% of them. Under a frequentist1 approach to probability, my forecast could have generated exactly the right probabilities, and we just happened to observe one of the 53 simulated elections where each candidate won this exact cocktail of states.
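
Continuing the hypothetical sketch above, counting how often one exact combination of state winners appears is just a matter of matching rows across the simulations:

```python
# How often does one exact combination of state winners show up among the
# simulated elections? (The "actual" map here is made up for illustration.)
actual_biden_win = np.array([True, True, True, False])
matches = np.all(biden_wins == actual_biden_win, axis=1).sum()
print(f"{matches} of {n_sims:,} simulations ({matches / n_sims:.3%})")
```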

Forecasters cannot predict the election outcome with absolute certainty, but models provide a range of possible scenarios. This model successfully anticipated a close Electoral College race alongside a larger popular vote margin, and the actual outcome occurred more than a handful of times in my simulations.

The predicted state-level two-party vote shares track the actual results incredibly closely, with a correlation of 0.961 between the two. With that said, there are a few patterns in the inaccuracies:

The maps below illustrate the areas with the greatest error. Notice that safe blue and red states such as New York and Louisiana have relatively large errors, while battleground states such as Texas and Ohio have extremely slim errors. For a closer look at the data, the table below contains all of the actual and predicted two-party vote shares for Joe Biden, ordered by the magnitude of the error:

Predicted vs. Actual Results

Error Map

State   Actual Democratic Two-Party Vote Share (%)   Predicted Democratic Two-Party Vote Share (%)   Error (Actual - Predicted)
NY 57.43412 69.61151 -12.1773879
RI 60.50962 69.52752 -9.0179003
HI 65.03266 72.32903 -7.2963701
LA 40.53556 33.93166 6.6039010
SC 44.07339 37.81854 6.2548501
DE 59.62674 65.87647 -6.2497293
AR 35.79245 29.62813 6.1643211
AK 44.76601 40.02248 4.7435307
CA 65.04771 69.50769 -4.4599811
CT 60.17662 64.60562 -4.4289997
MS 41.62040 37.20396 4.4164364
NJ 58.14243 62.50579 -4.3633523
WA 59.94931 64.18870 -4.2393888
ND 32.78259 36.80377 -4.0211801
OR 58.29672 62.25845 -3.9617291
MA 66.86069 70.77922 -3.9185289
NE 40.24716 44.02699 -3.7798311
WV 30.20202 33.80620 -3.6041767
KS 42.25143 38.65813 3.5932950
MN 53.63371 50.05711 3.5765990
GA 50.13742 47.01611 3.1213053
ME 55.12922 52.09217 3.0370501
SD 36.56522 39.42158 -2.8563608
AZ 50.15683 47.34474 2.8120875
MT 41.60282 38.79975 2.8030690
MO 42.16738 39.50587 2.6615088
AL 37.03289 34.62090 2.4119937
KY 36.79758 34.47128 2.3263011
IN 41.79340 39.56968 2.2237237
VA 55.15249 57.35345 -2.2009641
NV 51.22312 49.36777 1.8553463
TN 38.11647 36.32844 1.7880338
NM 55.51569 53.81917 1.6965192
CO 56.93974 58.48604 -1.5463005
UT 39.30639 37.82175 1.4846465
NC 49.31589 48.10486 1.2110359
IA 45.81652 46.91440 -1.0978824
VT 68.29919 67.26910 1.0300846
IL 58.61684 59.62805 -1.0112123
MI 51.44563 50.53204 0.9135949
NH 53.74888 53.06014 0.6887330
WY 27.51957 26.85758 0.6619846
FL 48.30525 48.95294 -0.6476975
TX 47.17236 46.69227 0.4800943
OK 33.05996 32.60532 0.4546372
MD 66.75430 67.16643 -0.4121362
ID 34.12328 33.75413 0.3691501
PA 50.60213 50.68815 -0.0860200
OH 45.95189 45.99202 -0.0401299
WI 50.31728 50.35326 -0.0359886

Since this model was not uniformly biased in one direction, unlike most other forecast models, its average error falls considerably closer to zero than that of other popular forecasts, and its errors are more normally distributed around zero:

Comparison Summary Statistics

Model   Mean Error   Root Mean Squared Error   Classification Accuracy (%)   Missed States
Kayla Manning -0.2413883 3.883678 94 AZ, GA, NV
The Economist -2.3310087 2.803927 96 FL, NC
FiveThirtyEight -2.4447961 3.019431 96 FL, NC
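
For concreteness, here is a minimal sketch of how summary statistics like those in the table above (along with the 0.961 correlation) could be computed. The data frame and column names are assumptions for illustration, not my actual objects:

```python
# Sketch of the comparison metrics, assuming a DataFrame with hypothetical columns
# "state", "actual", and "predicted" holding Democratic two-party vote shares (%).
import numpy as np
import pandas as pd

def summarize(df: pd.DataFrame) -> dict:
    error = df["actual"] - df["predicted"]        # signed error, in vote-share points
    actual_dem = df["actual"] > 50                # Biden carries the state
    predicted_dem = df["predicted"] > 50
    return {
        "correlation": df["actual"].corr(df["predicted"]),
        "mean_error": error.mean(),
        "rmse": np.sqrt((error ** 2).mean()),
        "classification_accuracy": 100 * (actual_dem == predicted_dem).mean(),
        "missed_states": sorted(df.loc[actual_dem != predicted_dem, "state"]),
    }
```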

Error Histograms

Hypotheses for Inaccuracies: Partisan Shifts in Voting and Voter Registration

As with any forecast model that incorporates polls, this forecast would have benefited from improved polling accuracy. Unfortunately, I do not control polling methodology, so I must improve my model in other ways. In an effort to minimize the impact of biased polls, I applied an aggressive weighting scheme based on FiveThirtyEight’s pollster grades. Despite these efforts, the model still produced extreme predictions in both directions, with more favorable predictions for Biden in liberal states and more favorable predictions for Trump in conservative states. Since this model was not uniformly biased toward either candidate, this leads me to believe that it did not pick up on states trending toward purple.
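
To illustrate the weighting idea, here is a hypothetical sketch of a grade-based scheme. The grade-to-weight mapping and column names are invented for illustration and are not the weights my model actually used:

```python
# Illustrative pollster-grade weighting; the grade-to-weight mapping is made up.
import pandas as pd

GRADE_WEIGHTS = {"A+": 3.0, "A": 2.5, "A-": 2.0, "B+": 1.5, "B": 1.0,
                 "B-": 0.75, "C+": 0.5, "C": 0.25}

def weighted_poll_average(polls: pd.DataFrame) -> float:
    """polls: hypothetical columns 'dem_share' (%) and 'fte_grade'."""
    weights = polls["fte_grade"].map(GRADE_WEIGHTS).fillna(0.1)  # ungraded ≈ ignored
    return float((polls["dem_share"] * weights).sum() / weights.sum())
```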

One hypothesis for the polarized predictions is that this model lacked a variable that directly considers the partisan trends within each state. This model failed to pick up on the magnitude of changing views in states such as Arizona and Georgia, both of which voted for Trump in 2016 yet voted for Biden in 2020 and were misclassified by this model.2 In addition to Georgia and Arizona, New York–the state with the largest prediction error–also followed the momentum of a 2016 partisan shift toward the center.

To account for this in 2024 and beyond, I could include a variable that captures shifting partisanship within a state between elections. In this model, I attempted to use demographic changes as a proxy, but a more direct variable might work better. In future iterations, I plan to incorporate a “difference in Democratic vote share” variable, which measures how a state’s Democratic share of the two-party popular vote changed between the previous two elections.

This forecast also does not include changes in party registration, which would likewise pick up on shifting partisanship within a state. More recent data are available for party registration, which means the model could use 2020 figures. To include voter registration in the models, I could take the change in Democrats’ share of registered voters since the previous election year. For example, if Democrats comprised 40% of Texas registered voters in 2016 and 45% of Texas registered voters in 2020, then the difference would be \(0.45 - 0.40 = 0.05\).
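
Both proposed variables are straightforward to construct from a state-by-year table. A minimal sketch, with assumed column names:

```python
# Sketch of the two proposed variables, assuming one row per state and election
# year with hypothetical columns 'state', 'year', 'dem_share' (two-party
# Democratic vote share), and 'dem_reg_share' (Democratic share of registered voters).
import pandas as pd

def add_partisan_shift_variables(df: pd.DataFrame) -> pd.DataFrame:
    df = df.sort_values(["state", "year"]).copy()
    grouped = df.groupby("state")
    prev_share = grouped["dem_share"].shift(1)        # last election's vote share
    prev_prev_share = grouped["dem_share"].shift(2)   # the election before that
    prev_reg = grouped["dem_reg_share"].shift(1)      # last election's registration share
    # Change in Democratic vote share between the previous two elections
    # (known before the current election).
    df["dem_share_change"] = prev_share - prev_prev_share
    # Change in Democratic registration share since the previous election, e.g.
    # Texas going from 40% to 45% registered Democrats gives 0.05.
    df["dem_reg_change"] = df["dem_reg_share"] - prev_reg
    return df
```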

Proposed Test to Assess Hypotheses: Assess New Models with Additional Variables

To assess the partisan shift hypothesis, I could reconstruct the model, following the same procedures as outlined in my final prediction. I would use the same data from 1992-2016 and include the new variable that captures the state-level changes in voting patterns between elections. Once I have constructed this model, I would follow a series of steps to assess its validity:

  1. First and foremost, I would assess the statistical significance of the partisan change coefficients for each of the pooled models.
  2. Then, I would assess the out-of-sample fit with leave-one-out cross-validation and compare the classification accuracy to that of my official 2020 forecast (a rough sketch of this step follows the list).
  3. If both of those steps support the strength of this new model, I would forecast the 2020 results using this year’s data.3
  4. Finally, I would compare this model’s 2020 forecast to my previous model.
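
As a rough sketch of the cross-validation in step 2, with placeholder fit and predict functions standing in for the actual pooled models:

```python
# Leave-one-out cross-validation by election year; `fit` and `predict` are
# placeholders for the real model-fitting and prediction code.
import pandas as pd

def loo_classification_accuracy(data: pd.DataFrame, fit, predict) -> float:
    """data: one row per state-year, with a 'year' column and a binary 'dem_win'."""
    hits, total = 0, 0
    for year in sorted(data["year"].unique()):
        train = data[data["year"] != year]    # hold one election out
        test = data[data["year"] == year]
        model = fit(train)
        predicted_dem_win = predict(model, test) > 0.5
        hits += int((predicted_dem_win == test["dem_win"].astype(bool)).sum())
        total += len(test)
    return hits / total
```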

These assessments should provide enough metrics to determine which model performs better in- and out-of-sample. If the new model provided more accurate 2020 predictions and performed better in- and out-of-sample, then I would know that some of the inaccuracies in my official model stemmed from the absence of a variable to capture partisan change between elections. However, if my previous model performed better or approximately the same, then I would stick with my original, more parsimonious model going forward.

To assess the validity of the voter registration hypothesis, I would follow the exact same steps as above, but with the variable that captures the change in Democratic voter registration. If both of these new models fare better than the original, I would assess the strength of an additional model that contains both partisan shifts in voting and voter registration, following the same process.

Improvements for Future Iterations

Aside from the lack of a variable to capture shifting partisan alignment within states, I also plan to make several methodological changes to this model for the future. I touched on many of these in greater detail in my final prediction post, but here is a brief overview:

  • This model does not include Washington D.C., so I manually added its 3 electoral votes after forecasting the vote shares for the 50 states. Ideally, I would find the necessary data to include D.C. in my forecast.
  • Also due to the absence of appropriate data, this model allocates the electoral votes from Maine and Nebraska on a winner-take-all basis rather than following the congressional district method, as they do in reality. Again, future iterations would ideally include district-level data for these states.
  • I need to improve my methodology for varying voter turnout and probabilities:
    • This model varied voter turnout and partisan probabilities independently by simply drawing from a normal distribution. A more sophisticated model in the future would introduce some correlation between geographies, demographic groups, and ideologies.
    • Moreover, since I drew these probabilities from a normal distribution, some states could have negative probabilities if the initial probability of voting for a particular party was extremely low (e.g., voting Republican in Hawaii). I took the absolute value of these draws to ensure all probabilities were positive, but this introduced extreme variation in states with an extremely low probability of voting for one party. For example, the confidence intervals for Republican votes in Hawaii were unrealistically wide, since the negative vote probabilities became positive probabilities of a larger magnitude than the normal distribution actually implied. I must find a better method to restrict the domain in future iterations of this model (one possible approach is sketched after this list).
  • Lastly, I classified states based on their 2020 ideologies. In the future, I would like to set a rule for classifying each state for every election, rather than relying solely on the 2020 classification by the New York Times. For example, this model considered Colorado as a “blue state” for all years based on its 2020 classification, but it was either a “red state” or “battleground state” in most of the previous elections in the data. In its current condition, the “blue state” model was constructed with all Colorado data from 1992-2018. Ideally, I would use Colorado data from the years it was considered a “battleground state” to construct the “battleground” model, the years it was a “red state” to construct the “red” model, and the years it was a “blue state” to construct the “blue” model.
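
As referenced in the turnout bullet above, one possible way to restrict the domain is to perturb each probability on the logit scale (or draw from a truncated normal), so that every draw stays strictly between 0 and 1. A minimal sketch with illustrative parameter values:

```python
# Perturb a vote probability on the logit scale so draws stay in (0, 1); the
# standard deviation and baseline probability below are illustrative only.
import numpy as np
from scipy.special import expit, logit

rng = np.random.default_rng(0)

def perturb_probability(p: float, sd: float = 0.15, n_draws: int = 10_000) -> np.ndarray:
    """Draw n_draws probabilities centered on p without leaving (0, 1)."""
    return expit(rng.normal(loc=logit(p), scale=sd, size=n_draws))

# Even a very low baseline (e.g., the probability of voting Republican in Hawaii)
# stays positive, without the distortion introduced by taking absolute values.
draws = perturb_probability(0.03)
assert draws.min() > 0 and draws.max() < 1
```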

Conclusion

While my forecast failed to predict the election outcomes with absolute precision, this model correctly projected a relatively close race in the Electoral College alongside a larger margin in the popular vote. Furthermore, the outcomes of November 3 all reasonably match the probabilities assigned by the model. Even in GA, NV, and AZ–the three misclassified states–the actual vote shares were not far from the predictions, and the simulations gave both candidates a fair probability of winning all three of those states. Although this model predicted the election exceptionally well when many others did not, future iterations must do a better job of accounting for partisan shifts within states.



  1. Unlike rolling dice, we cannot experience multiple occurrences of the same election to uncover the true probability of each event. Frequentist probability describes the relative frequency of an event in many trials; conducting many simulations in my model took a frequentist approach to uncover the probability of each outcome. However, we can never really know if any of the probabilities were correct because the 2020 election only happened once (thank goodness!). Trying to say whether or not a probabilistic forecast was correct is like rolling a “six” on a single die and concluding that your prior probabilities of 1/6 for rolling a 6 and 5/6 for rolling anything else were incorrect because you observed the less probable outcome on a single iteration.↩︎

  2. However, any changes would have to account for the fact that FL, OH, WI, etc. were more conservative than most forecasts anticipated, and this model correctly anticipated the winner in these highly contentious battleground states.↩︎

  3. To remain consistent with my final forecast, I would not use polls from after 3 PM EST on November 1, which is the last time I used FiveThirtyEight’s state-level polling data for my original model.↩︎